Enhancing cross layer monitoring on open optical transport networks
Continuous monitoring of key network elements is instrumental in intelligent control and predictive analysis. This demonstration illustrates implementation challenges encountered in cross-layer monitoring of optical transport networks in an open-source network operations platform.
Traffic-Profile and Machine Learning Based Regional Data Center Design and Operation for 5G Network
Data centers in the fifth-generation (5G) network will serve as facilitators in moving the wireless communication industry from a proprietary-hardware-based approach to a more software-oriented environment. Techniques such as software-defined networking (SDN) and network function virtualization (NFV) make it possible to deploy network functionalities, such as service and packet gateways, as software. These virtual functionalities, however, require computational power from data centers. Therefore, these data centers need to be properly placed and carefully designed based on the volume of traffic they are meant to serve. In this work, we first divide the city of Milan, Italy, into different zones using the K-means clustering algorithm. We then analyse the traffic profiles of these zones using a network operator's Open Big Data set. We formulate the optimal placement of data centers as a facility location problem and propose the use of Weiszfeld's algorithm to solve it. Furthermore, based on our analysis of the traffic profiles in the different zones, we heuristically determine the ideal dimension of the data center in each zone. Additionally, to aid operation and facilitate dynamic utilization of data center resources, we use state-of-the-art recurrent neural network models to predict future traffic demands from the past demand profiles of each area.
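The abstract names Weiszfeld's algorithm for the facility-location step but gives no implementation details. As an illustration only (not the authors' code), a minimal sketch of the classic Weiszfeld iteration, which approximates the geometric median of a set of 2-D demand points by repeated inverse-distance-weighted averaging:

```python
import math

def weiszfeld(points, tol=1e-7, max_iter=1000):
    """Approximate the geometric median of a list of (x, y) points.

    Each step replaces the current estimate with the inverse-distance-
    weighted average of the points, which monotonically decreases the
    sum of Euclidean distances (the facility-location objective).
    """
    # Start from the centroid.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(max_iter):
        num_x = num_y = denom = 0.0
        for px, py in points:
            # Regularize the distance to avoid division by zero when the
            # estimate lands exactly on a demand point.
            d = max(math.hypot(x - px, y - py), 1e-12)
            w = 1.0 / d
            num_x += w * px
            num_y += w * py
            denom += w
        nx, ny = num_x / denom, num_y / denom
        if math.hypot(nx - x, ny - y) < tol:
            return nx, ny
        x, y = nx, ny
    return x, y
```

In a setting like the paper's, `points` would be the traffic-weighted zone locations produced by the clustering step; a traffic-weighted variant simply multiplies each `w` by the zone's demand.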
On the Application of Explainable Artificial Intelligence to Lightpath QoT Estimation
We demonstrate the potential of explainable AI when applied to distill knowledge from a trained supervised machine learning model for lightpath quality-of-transmission estimation in optical networks, using synthetic datasets.
Experimental Demonstration and Results of Cross-layer Monitoring Using OpenNOP: an Open Source Network Observability Platform
Ensuring the smooth operation and optimal performance of communication networks requires continuous monitoring of key network elements. Network operators can detect and prevent potential issues by monitoring various real-time network parameters. This paper proposes and presents results from the implementation of a cross-layer monitoring system for OpenROADM-compliant optical transport networks using an open-source network observability platform called OpenNOP, and for the first time includes simultaneous optical-layer and transport-layer metrics. It leverages open-source tools as a cost-effective and efficient solution for network monitoring and management. OpenNOP collects and analyzes data from various network layers, including the physical, data link, network, and transport layers. OpenNOP can also ingest status and log information. All of this data is stored in a common time-series database. The results show that OpenNOP can provide comprehensive network visibility and effective cross-layer monitoring of OpenROADM-based networks.
Performance Characterization and Profiling of Chained CPU-bound Virtual Network Functions
The increased demand for high-quality Internet connectivity resulting from the growing number of connected devices and advanced services has put significant strain on telecommunication networks. In response, cutting-edge technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) have been introduced to transform network infrastructure. These innovative solutions offer dynamic, efficient, and easily manageable networks that surpass traditional approaches. To fully realize the benefits of NFV and maintain the performance level of specialized equipment, it is critical to assess the behavior of Virtual Network Functions (VNFs) and the impact of virtualization overhead. This paper delves into understanding how factors such as resource allocation, consumption, and traffic load impact the performance of VNFs. We aim to provide a detailed analysis of these factors and develop analytical functions that accurately describe their impact. By testing VNFs on different testbeds, we identify the key parameters and trends, and develop models to generalize VNF behavior. Our results highlight the negative impact of resource saturation on performance and identify the CPU as the main bottleneck. We also propose a VNF profiling procedure as a solution to model the observed trends, and test more complex VNF deployment scenarios to evaluate the impact of interconnection, co-location, and NFV infrastructure on performance.
Processing ANN Traffic Predictions for RAN Energy Efficiency
The field of networking, like many others, is experiencing a peak of interest in the use of Machine Learning (ML) algorithms. In this paper, we focus on the application of ML tools to resource management in a portion of a Radio Access Network (RAN) and, in particular, to Base Station (BS) activation and deactivation, aiming at reducing energy consumption while providing enough capacity to satisfy the variable traffic demand generated by end users. In order to properly decide on BS (de)activation, traffic predictions are needed, and Artificial Neural Networks (ANNs) are used for this purpose. Since critical BS (de)activation decisions are not taken near the minima and maxima of the traffic patterns, high accuracy in the traffic estimation is not required at those times, but only close to the times when a decision is taken. This calls for careful processing of the ANN traffic predictions to increase the probability of a correct decision. Numerical performance results, in terms of energy saving and traffic lost due to incorrect BS deactivations, are obtained by simulating algorithms for traffic prediction processing, using real traffic as input. Results suggest that good performance trade-offs can be achieved even in the presence of non-negligible traffic prediction errors, if these forecasts are properly processed.
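The abstract does not specify how the ANN forecasts are processed before the (de)activation decision. As a purely hypothetical sketch of the general idea (bias the forecast toward over-provisioning so that under-predictions do not cause traffic loss), assuming a simple model where each BS contributes a fixed capacity and the chosen safety margin is a free parameter:

```python
import math

def bs_decisions(predictions, capacity_per_bs, n_bs_max, margin=0.10):
    """For each forecast traffic value, choose how many base stations to keep active.

    Inflating the forecast by a safety margin (here 10%) hedges against
    under-prediction: a wrong deactivation loses traffic, while a wrong
    activation only costs some energy, so the two errors are weighted
    asymmetrically.
    """
    decisions = []
    for p in predictions:
        demand = p * (1.0 + margin)                    # margin-adjusted forecast
        n = max(1, math.ceil(demand / capacity_per_bs))  # BSs needed to cover it
        decisions.append(min(n, n_bs_max))             # cannot exceed deployed BSs
    return decisions
```

The margin here plays the role of the "careful processing" the abstract alludes to; the paper's actual processing algorithms may differ substantially.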
Dynamic Virtual Network Function Placement over a Software-Defined Optical Network
We demonstrate how to dynamically place Virtual Network Functions over a software-defined optical network integrating IT computing and real IP-over-WDM resources, thus allowing the exchange of real traffic.
Admission Control and Virtual Network Embedding in 5G Networks: A Deep Reinforcement-Learning Approach
Fifth-generation (5G) networks are already available in major urban areas and are expected to bring a major transformation to citizens' lives. 5G services, such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC), require a network infrastructure capable of supporting stringent latency and bandwidth requirements; as such, it must be highly dynamic and flexible. Network slicing is a key enabler technology that can provide dynamic and flexible characteristics to the 5G network architecture. A network slice (NS) can be defined as a partition of network and IT resources, that is, network link and node capacity dedicated to a specific set of service demands. As a result, different NSs can coexist over the same physical infrastructure network and can be used to dynamically and flexibly deploy the aforementioned 5G services. However, to efficiently implement NSs with different requirements, communication service providers (CSPs) that own the physical infrastructure network must adopt sophisticated techniques for admission control and resource allocation of NSs. In this paper, we present a novel framework for admission control and resource allocation of 5G NSs in metro-core networks. Specifically, our framework is based on a deep reinforcement learning (DRL) algorithm called Advantage Actor Critic (A2C), which performs admission control, i.e., it is capable of learning which slice to admit based on the availability of the physical network resources. Then, given the diversity of requirements for each 5G service, we propose different resource allocation algorithms based on integer linear programming (ILP) and heuristics to treat each service accordingly. Results show that our proposed framework can increase the number of admitted NSs, compared to the case in which admission control is disabled, by improving the resource allocation performance.
Experimental Demonstration of ML-Based DWDM System Margin Estimation
SNR margins between partially and fully loaded DWDM systems are estimated without detailed knowledge of the network. The ML model, trained on simulation data, achieves accurate predictions on experimental data with an RMSE of 0.16 dB.
Comment: This work has been partially funded by the German Federal Ministry of Education and Research in the CELTIC-NEXT project AI-NET-PROTECT (#16KIS1279K) and in the programme of "Souverän. Digital. Vernetzt." joint project 6G-life (#16KISK002). Work was also funded by Science Foundation Ireland projects OpenIreland (18/RI/5721) and 13/RC/2077 p